
    End-to-End Error-Correcting Codes on Networks with Worst-Case Symbol Errors

    Full text link
    The problem of coding for networks experiencing worst-case symbol errors is considered. We argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup, prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. A new transform metric for errors under the considered model is proposed. Using this metric, we replicate many of the classical results from coding theory. Specifically, we prove new Hamming-type, Plotkin-type, and Elias-Bassalygo-type upper bounds on the network capacity. A commensurate lower bound is shown based on Gilbert-Varshamov-type codes for error-correction. The GV codes used to attain the lower bound can be non-coherent, that is, they do not require prior knowledge of the network topology. We also propose a computationally efficient concatenation scheme. The rate achieved by our concatenated codes is characterized by a Zyablov-type lower bound. We provide a generalized minimum-distance decoding algorithm which decodes up to half the minimum distance of the concatenated codes. The end-to-end nature of our design enables our codes to be overlaid on the classical distributed random linear network codes [1]. Furthermore, our design makes the potentially intensive computation at internal nodes for link-by-link error-correction unnecessary. Comment: Submitted for publication. arXiv admin note: substantial text overlap with arXiv:1108.239
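
    As background for the Gilbert-Varshamov-type lower bound mentioned above, here is a minimal sketch of the classical q-ary GV rate computation. Note this is the standard Hamming-metric bound; the paper's version is stated under its new transform metric, which this sketch does not implement.

    ```python
    import math

    def q_ary_entropy(q, x):
        """Classical q-ary entropy function H_q(x), for 0 <= x < (q-1)/q."""
        if x == 0:
            return 0.0
        return (x * math.log(q - 1, q)
                - x * math.log(x, q)
                - (1 - x) * math.log(1 - x, q))

    def gv_rate(q, delta):
        """Classical Gilbert-Varshamov lower bound on rate at relative distance delta."""
        return max(0.0, 1.0 - q_ary_entropy(q, delta))

    # Example: binary GV rate at relative distance 0.1
    print(gv_rate(2, 0.1))   # ~0.531
    ```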

    Communication and distributional complexity of joint probability mass functions

    Get PDF
    The problem of truly-lossless (P_e = 0) distributed source coding [1] requires knowledge of the joint statistics of the sources. In particular, the locations of the zeroes of the probability mass functions (pmfs) are crucial for encoding at rates below (H(X), H(Y)) [2]. We consider the distributed computation of the empirical joint pmf P_n of a sequence of random variable pairs observed at physically separated nodes of a network. We consider both worst-case and average measures of information exchange and treat both exact calculation of P_n and a notion of approximation. We find that in all cases the communication cost grows linearly with the size of the input. Further, we consider the problem of determining whether the empirical pmf has a zero in a particular location and show that in most cases considered this also requires a communication cost that is linear in the input size.
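
    A minimal sketch of the object being computed, the empirical joint pmf P_n of n observed pairs. The distributed protocol and its communication cost are the paper's actual subject; this just shows the quantity itself, with illustrative names.

    ```python
    from collections import Counter

    def empirical_joint_pmf(xs, ys):
        """Empirical joint pmf P_n of n observed pairs (x_i, y_i)."""
        assert len(xs) == len(ys)
        n = len(xs)
        counts = Counter(zip(xs, ys))
        return {pair: c / n for pair, c in counts.items()}

    # A zero at (a, b) means the pair (a, b) never occurred -- the kind of
    # structural information that matters for truly-lossless distributed coding.
    pmf = empirical_joint_pmf([0, 0, 1, 1], [0, 1, 1, 1])
    print(pmf)                   # {(0, 0): 0.25, (0, 1): 0.25, (1, 1): 0.5}
    print(pmf.get((1, 0), 0.0))  # 0.0: a zero location
    ```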

    Communication over an Arbitrarily Varying Channel under a State-Myopic Encoder

    Full text link
    We study the problem of communication over a discrete arbitrarily varying channel (AVC) when a noisy version of the state is known non-causally at the encoder. The state is chosen by an adversary which knows the coding scheme. A state-myopic encoder observes this state non-causally, though imperfectly, through a noisy discrete memoryless channel (DMC). We first characterize the capacity of this state-dependent channel when the encoder and decoder share randomness unknown to the adversary, i.e., the randomized coding capacity. Next, we show that when only the encoder is allowed to randomize, the capacity remains unchanged when positive. Interesting and well-known special cases of the state-myopic encoder model are also presented. Comment: 16 pages
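
    A toy simulation of the state-myopic observation described above: the adversary's state sequence is seen at the encoder only through a noisy DMC. The binary alphabet and BSC observation channel here are illustrative choices, not the paper's general model.

    ```python
    import random

    def observe_state_myopically(states, flip_prob):
        """Encoder's noisy, non-causal view of the adversary's binary state
        sequence, observed through a BSC(flip_prob)."""
        return [s ^ (random.random() < flip_prob) for s in states]

    adversarial_states = [random.randint(0, 1) for _ in range(10)]
    encoder_view = observe_state_myopically(adversarial_states, flip_prob=0.2)
    print(adversarial_states)
    print(encoder_view)
    ```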

    The Capacity of Online (Causal) q-ary Error-Erasure Channels

    Full text link
    In the q-ary online (or "causal") channel coding model, a sender wishes to communicate a message to a receiver by transmitting a codeword $\mathbf{x} = (x_1, \ldots, x_n) \in \{0, 1, \ldots, q-1\}^n$ symbol by symbol via a channel limited to at most $pn$ errors and/or $p^{*}n$ erasures. The channel is "online" in the sense that at the $i$th step of communication the channel decides whether to corrupt the $i$th symbol or not based on its view so far, i.e., its decision depends only on the transmitted symbols $(x_1, \ldots, x_i)$. This is in contrast to the classical adversarial channel, in which the corruption is chosen by a channel that has full knowledge of the sent codeword $\mathbf{x}$. In this work we study the capacity of q-ary online channels for a combined corruption model, in which the channel may impose at most $pn$ errors and at most $p^{*}n$ erasures on the transmitted codeword. The online channel (in both the error and erasure case) has seen a number of recent studies which present both upper and lower bounds on its capacity. In this work, we give a full characterization of the capacity as a function of $q$, $p$, and $p^{*}$. Comment: This is a new version of the binary case, which can be found at arXiv:1412.637
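
    A toy simulation of the causal corruption model described above. The greedy randomized decision rule here is just one possible causal strategy, chosen for illustration; the paper's capacity result holds against all causal adversaries.

    ```python
    import random

    def online_adversary(codeword, q, p, p_star):
        """Causally corrupt a q-ary codeword with at most p*n errors and
        p_star*n erasures. The channel sees only x_1..x_i when deciding the
        fate of symbol i. 'None' marks an erasure."""
        n = len(codeword)
        errors_left = int(p * n)
        erasures_left = int(p_star * n)
        out = []
        for i, x in enumerate(codeword):
            # Decision may depend only on codeword[:i+1] -- the causal restriction.
            if errors_left > 0 and random.random() < p:
                out.append((x + random.randrange(1, q)) % q)  # symbol error
                errors_left -= 1
            elif erasures_left > 0 and random.random() < p_star:
                out.append(None)                              # erasure
                erasures_left -= 1
            else:
                out.append(x)
        return out

    x = [random.randrange(4) for _ in range(20)]
    print(online_adversary(x, q=4, p=0.1, p_star=0.1))
    ```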

    Learning Immune-Defectives Graph through Group Tests

    Full text link
    This paper deals with an abstraction of a unified problem of drug discovery and pathogen identification. Pathogen identification involves identification of disease-causing biomolecules. Drug discovery involves finding chemical compounds, called lead compounds, that bind to pathogenic proteins and eventually inhibit the function of the protein. In this paper, the lead compounds are abstracted as inhibitors, pathogenic proteins as defectives, and the mixture of "ineffective" chemical compounds and non-pathogenic proteins as normal items. A defective can be immune to the presence of an inhibitor in a test, so a test containing a defective is positive iff it does not contain the defective's "associated" inhibitor. The goal of this paper is to identify the defectives, inhibitors, and their "associations" with high probability, or in other words, learn the Immune Defectives Graph (IDG) efficiently through group tests. We propose a probabilistic non-adaptive pooling design, a probabilistic two-stage adaptive pooling design, and decoding algorithms for learning the IDG. For the two-stage adaptive pooling design, we show that the number of tests required to guarantee recovery of the inhibitors, defectives, and their associations with high probability, i.e., the upper bound on the sample complexity, exceeds the proposed lower bound by a logarithmic multiplicative factor in the number of items. For the non-adaptive pooling design too, we show that the upper bound exceeds the proposed lower bound by at most a logarithmic multiplicative factor in the number of items. Comment: Double column, 17 pages. Updated with tighter lower bounds and other minor edits
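
    A minimal sketch of the test-outcome model described above: a pool is positive iff it contains a defective without that defective's associated inhibitor. The items, associations, and function names are illustrative.

    ```python
    def test_outcome(pool, defectives, inhibitor_of):
        """A pool is positive iff it contains some defective whose associated
        inhibitor is NOT also in the pool (the IDG outcome model)."""
        return any(d in pool and inhibitor_of[d] not in pool for d in defectives)

    # Toy instance: items 0..9; defectives 2 and 5; inhibitor_of maps each
    # defective to the inhibitor that immunizes it.
    defectives = {2, 5}
    inhibitor_of = {2: 7, 5: 9}

    pool_a = {1, 2, 3}   # contains defective 2, no inhibitor 7 -> positive
    pool_b = {2, 7, 8}   # defective 2 together with its inhibitor 7 -> negative
    print(test_outcome(pool_a, defectives, inhibitor_of))  # True
    print(test_outcome(pool_b, defectives, inhibitor_of))  # False
    ```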

    Zero Error Coordination

    Full text link
    In this paper, we consider a zero-error coordination problem wherein the nodes of a network exchange messages to be able to perfectly coordinate their actions with the individual observations of each other. While previous works on coordination commonly assume an asymptotically vanishing error, we assume exact, zero-error coordination. Furthermore, unlike previous works that employ the empirical or strong notions of coordination, we define and use a notion of set coordination. This notion of coordination bears similarities with the empirical notion of coordination. We observe that set coordination, in its special case of two nodes with a one-way communication link, is equivalent to the "Hide and Seek" source coding problem of McEliece and Posner. The Hide and Seek problem has known intimate connections with graph entropy, rate distortion theory, Rényi mutual information, and even error exponents. Other special cases of the set coordination problem relate to Witsenhausen's zero-error rate and the distributed computation problem. These connections motivate a better understanding of set coordination, its connections with empirical coordination, and its study in more general setups. This paper takes a first step in this direction by proving new results for two-node networks.
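
    As background for the zero-error notions mentioned above, a minimal sketch of one-shot zero-error coding via a confusability graph: any set of pairwise non-confusable inputs is a zero-error code. This is the standard textbook construction, not the paper's set-coordination scheme.

    ```python
    def zero_error_code(inputs, confusable):
        """Greedy independent set in the confusability graph: a set of inputs,
        no two of which can produce the same channel output, is a one-shot
        zero-error code."""
        code = []
        for x in inputs:
            if all(not confusable(x, c) for c in code):
                code.append(x)
        return code

    # Toy channel on inputs 0..4 where x and y are confusable iff |x - y| <= 1
    # (a "typewriter"-style channel).
    print(zero_error_code(range(5), lambda a, b: abs(a - b) <= 1))  # [0, 2, 4]
    ```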

    Analog network coding in general SNR regime: Performance of a greedy scheme

    Full text link
    The problem of maximum rate achievable with analog network coding for a unicast communication over a layered relay network with directed links is considered. A relay node performing analog network coding scales and forwards the signals received at its input. Recently, this problem has been considered under certain assumptions on per-node scaling factors and received SNRs. Previously, we established a result that allows us to characterize the optimal performance of analog network coding in network scenarios beyond those that can be analyzed using the approaches based on such assumptions. The key contribution of this work is a scheme to greedily compute a lower bound on the optimal rate achievable with analog network coding in general layered networks. This scheme allows for exact computation of the optimal achievable rates in a wider class of layered networks than those that can be addressed using existing approaches. For the specific case of the Gaussian N-relay diamond network, to the best of our knowledge, the proposed scheme provides the first exact characterization of the optimal rate achievable with analog network coding. Further, for general layered networks, our scheme allows us to compute optimal rates within a constant gap from the cut-set upper bound asymptotically in the source power. Comment: 11 pages, 5 figures. Fixed an issue with the notation in the statement and proof of Lemma 1. arXiv admin note: substantial text overlap with arXiv:1204.2150 and arXiv:1202.037
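
    An illustrative numerical sketch for the Gaussian N-relay diamond network mentioned above, with a brute-force grid search over per-relay scaling factors standing in for the paper's greedy scheme. All channel gains and power values are made up, and the rate expression is the standard AF-SNR computation for a two-layer diamond.

    ```python
    import itertools, math

    def af_rate(betas, h, g, P=1.0, Pr=1.0, sigma2=1.0):
        """AF rate of a Gaussian diamond network: relay i receives h_i*x + n_i,
        scales by beta_i; destination sees sum_i g_i*beta_i*(h_i*x + n_i) + n_d."""
        # Per-relay power constraint: beta_i^2 * (h_i^2 * P + sigma2) <= Pr
        for b, hi in zip(betas, h):
            if b * b * (hi * hi * P + sigma2) > Pr + 1e-12:
                return 0.0
        signal = sum(gi * b * hi for gi, b, hi in zip(g, betas, h)) ** 2 * P
        noise = sum((gi * b) ** 2 for gi, b in zip(g, betas)) * sigma2 + sigma2
        return 0.5 * math.log2(1 + signal / noise)

    h, g = [1.0, 0.6], [0.7, 1.2]  # made-up source->relay and relay->dest gains
    beta_max = [math.sqrt(1.0 / (hi * hi + 1.0)) for hi in h]
    grid = [[b * t / 20 for t in range(21)] for b in beta_max]
    best = max(itertools.product(*grid), key=lambda bs: af_rate(bs, h, g))
    print(best, af_rate(best, h, g))
    ```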

    Concatenated Polar Codes

    Get PDF
    Polar codes have attracted much recent attention as the first codes with low computational complexity that provably achieve optimal rate-regions for a large class of information-theoretic problems. One significant drawback, however, is that for current constructions the probability of error decays sub-exponentially in the block-length (more detailed designs improve the probability of error at the cost of significantly increased computational complexity [KorUS09]). In this work we show how the classical idea of code concatenation -- using "short" polar codes as inner codes and a "high-rate" Reed-Solomon code as the outer code -- results in substantially improved performance. In particular, code concatenation with a careful choice of parameters boosts the rate of decay of the probability of error to almost exponential in the block-length with essentially no loss in computational complexity. We demonstrate such performance improvements for three sets of information-theoretic problems -- a classical point-to-point channel coding problem, a class of multiple-input multiple-output channel coding problems, and some network source coding problems.
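
    A minimal sketch of the concatenation structure: a plain Arikan polar transform as the inner map, with a toy parity block standing in for the high-rate Reed-Solomon outer code described above (a real construction would use RS over an extension field).

    ```python
    def polar_transform(u):
        """Arikan's polar transform u -> u * F^{tensor m} over GF(2), where
        F = [[1, 0], [1, 1]]; len(u) must be a power of two (bit-reversal
        permutation omitted, as it does not affect the code's performance)."""
        n = len(u)
        if n == 1:
            return u[:]
        a, b = u[:n // 2], u[n // 2:]
        combined = [ai ^ bi for ai, bi in zip(a, b)]
        return polar_transform(combined) + polar_transform(b)

    def concatenated_encode(message_blocks):
        """Toy concatenation: append a parity block as a stand-in for the
        Reed-Solomon outer code, then polar-transform each block (inner code)."""
        parity = [0] * len(message_blocks[0])
        for blk in message_blocks:
            parity = [p ^ b for p, b in zip(parity, blk)]
        outer = message_blocks + [parity]                 # outer codeword (toy)
        return [polar_transform(blk) for blk in outer]    # inner polar encoding

    print(concatenated_encode([[1, 0, 1, 1], [0, 1, 1, 0]]))
    ```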

    Amplify-and-Forward in Wireless Relay Networks

    Full text link
    A general class of wireless relay networks with a single source-destination pair is considered. Intermediate nodes in the network employ an amplify-and-forward scheme to relay their input signals. In this case, the overall input-output channel from the source via the relays to the destination effectively behaves as an intersymbol interference channel with colored noise. Unlike previous work, we formulate the problem of the maximum achievable rate in this setting as an optimization problem with no assumption on the network size, topology, and received signal-to-noise ratio. Previous work considered only scenarios wherein relays use all their power to amplify their received signals. We demonstrate that this need not maximize the achievable rate in amplify-and-forward relay networks. The proposed formulation allows us to not only recover known results on the performance of amplify-and-forward schemes for some simple relay networks but also characterize the performance of more complex amplify-and-forward relay networks which cannot be addressed in a straightforward manner using existing approaches. Using cut-set arguments, we derive simple upper bounds on the capacity of general wireless relay networks. Through various examples, we show that a large class of amplify-and-forward relay networks can achieve rates within a constant factor of these upper bounds asymptotically in network parameters. Comment: Minor revision: fixed a typo in eqn. reference, changed the formatting. 30 pages, 8 figures
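
    As background for the "ISI channel with colored noise" view above, a minimal water-filling sketch over discretized frequency bins. This is the standard textbook power allocation for parallel Gaussian subchannels, not the paper's network optimization; the gains and noise values are made up.

    ```python
    import math

    def water_filling(gains, noise, total_power, iters=100):
        """Power allocation over parallel subchannels: p_k = max(0, mu - noise_k/gain_k),
        with the water level mu found by bisection."""
        inv_snr = [n / g for g, n in zip(gains, noise)]
        lo, hi = 0.0, max(inv_snr) + total_power
        for _ in range(iters):
            mu = (lo + hi) / 2
            if sum(max(0.0, mu - v) for v in inv_snr) > total_power:
                hi = mu
            else:
                lo = mu
        powers = [max(0.0, lo - v) for v in inv_snr]
        rate = sum(math.log2(1 + p / v) for p, v in zip(powers, inv_snr) if p > 0)
        return powers, rate

    # |H(f)|^2 and noise spectrum sampled at three frequency bins (made-up numbers)
    powers, rate = water_filling(gains=[1.0, 0.5, 0.25], noise=[1.0, 1.0, 2.0],
                                 total_power=3.0)
    print(powers, rate)
    ```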